Constraining pseudo‐label in self‐training unsupervised domain adaptation with energy‐based model

Authors

Abstract

Deep learning is usually data starved, and unsupervised domain adaptation (UDA) is developed to transfer the knowledge in a labeled source domain to an unlabeled target domain. Recently, deep self-training has emerged as a powerful means for UDA, involving an iterative process of predicting on the target domain and then taking the confident predictions as hard pseudo-labels for retraining. However, the pseudo-labels are usually unreliable, thus easily leading to deviated solutions with propagated errors. In this paper, we resort to the energy-based model and constrain the training of the unlabeled target samples with an energy function minimization objective. It can be achieved via a simple additional regularization or an energy-based loss. This framework allows us to gain the benefits of the energy-based model, while retaining strong discriminative performance in a plug-and-play fashion. The convergence property and its connection with classification expectation are investigated. We deliver extensive experiments on the most popular and large-scale UDA benchmarks of image classification as well as semantic segmentation to demonstrate its generality and effectiveness.
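As an illustration of the constraint described above, the following is a minimal sketch (not the authors' reference implementation) of a self-training step with an added energy-minimization regularizer. It assumes a PyTorch classifier that outputs raw logits and uses the common free-energy definition E(x) = -logsumexp over the logits; the names self_training_step, conf_threshold, and lambda_energy are illustrative choices rather than quantities specified in the abstract.

import torch
import torch.nn.functional as F

def free_energy(logits):
    # Free energy of each sample under the usual convention: E(x) = -logsumexp_k f_k(x).
    return -torch.logsumexp(logits, dim=1)

def self_training_step(model, target_images, conf_threshold=0.9, lambda_energy=0.1):
    # One self-training update on an unlabeled target batch: confident predictions
    # become hard pseudo-labels, and an extra term pushes the energy of every
    # target sample down, in the spirit of the constraint described in the abstract.
    logits = model(target_images)                    # (batch, num_classes) raw scores
    probs = F.softmax(logits.detach(), dim=1)
    conf, pseudo_labels = probs.max(dim=1)
    mask = conf >= conf_threshold                    # keep only confident predictions

    if mask.any():
        ce_loss = F.cross_entropy(logits[mask], pseudo_labels[mask])
    else:
        ce_loss = logits.new_zeros(())               # no confident samples in this batch

    energy_loss = free_energy(logits).mean()         # energy-minimization regularizer
    return ce_loss + lambda_energy * energy_loss

In practice the regularization weight and the confidence threshold would be tuned per benchmark; the abstract only states that the constraint can be added as a simple regularization or loss on top of an existing self-training pipeline.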


Similar articles

Model Selection with Nonlinear Embedding for Unsupervised Domain Adaptation

Domain adaptation deals with adapting classifiers trained on data from a source distribution, to work effectively on data from a target distribution. In this paper, we introduce the Nonlinear Embedding Transform (NET) for unsupervised domain adaptation. The NET reduces cross-domain disparity through nonlinear domain alignment. It also embeds the domain-aligned data such that similar data points ...


Unsupervised Transductive Domain Adaptation

Supervised learning with large scale labeled datasets and deep layered models has made a paradigm shift in diverse areas in learning and recognition. However, this approach still suffers from generalization issues in the presence of a domain shift between the training and the test data distribution. In this regard, unsupervised domain adaptation algorithms have been proposed to directly address t...


Unsupervised Domain Adaptation with Feature Embeddings

Representation learning is the dominant technique for unsupervised domain adaptation, but existing approaches often require the specification of “pivot features” that generalize across domains, which are selected by task-specific heuristics. We show that a novel but simple feature embedding approach provides better performance, by exploiting the feature template structure common in NLP problems.


Unsupervised Domain Adaptation with Residual Transfer Networks

The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source doma...
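The "residual transfer" idea named in this title can be sketched roughly as follows. This is an illustrative approximation rather than the cited paper's exact formulation; the class name ResidualClassifier, the two-layer residual block, and entropy_loss are assumptions made for the example.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualClassifier(nn.Module):
    # Models the source classifier as the target classifier plus a learned
    # residual correction: f_S(x) = f_T(x) + delta_f(f_T(x)).
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.target_head = nn.Linear(feat_dim, num_classes)  # f_T, used on target data at test time
        self.residual = nn.Sequential(                        # delta_f, a small correction fit on source labels
            nn.Linear(num_classes, num_classes),
            nn.ReLU(),
            nn.Linear(num_classes, num_classes),
        )

    def forward(self, features):
        target_logits = self.target_head(features)
        source_logits = target_logits + self.residual(target_logits)
        return source_logits, target_logits

def entropy_loss(logits):
    # Entropy of target predictions; minimizing it encourages confident outputs
    # on the unlabeled target domain.
    p = F.softmax(logits, dim=1)
    return -(p * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

In such a setup, source labels would supervise source_logits while target_logits are used for prediction on the target domain.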


Unsupervised Multi-Domain Adaptation with Feature Embeddings

Representation learning is the dominant technique for unsupervised domain adaptation, but existing approaches have two major weaknesses. First, they often require the specification of “pivot features” that generalize across domains, which are selected by task-specific heuristics. We show that a novel but simple feature embedding approach provides better performance, by exploiting the feature tem...



Journal

Journal title: International Journal of Intelligent Systems

Year: 2022

ISSN: 1098-111X, 0884-8173

DOI: https://doi.org/10.1002/int.22930